Shorter timelines don’t clearly warrant less caution about infohazards
I have started to worry that some people are reaching premature conclusions about what AGI timelines imply for biosecurity.
One example is infohazards, and how cautious we should be about them. A claim I’ve heard twice (from two fairly influential people in biosecurity) is that shorter timelines warrant less caution about infohazards. I think the reasoning might be this:
1. For any piece of information I’m considering releasing, shorter timelines imply that AIs will disclose this information sooner.
2. If AIs disclose this information sooner, then my withholding it delays its dissemination by a shorter period of time.
3. During a shorter period of delay, the piece of information has less time to accrue risk, and so the delay averts less risk.
Conclusion: We have less reason to withhold this piece of information.
I think that is too quick. The key issue is with the third premise: “Delaying its dissemination by a shorter period of time means we avert less risk.” Importantly, if timelines are shorter, the risk of biological attack per unit time arguably increases, because AIs would provide greater uplift to enterprising bad actors. Shorter timelines therefore potentially concentrate the same (or greater) biological risk into a shorter period of time. As a result, delaying a piece of information by 1 month under short timelines could be as valuable as, or more valuable than, delaying the same piece of information by 2 months under longer timelines.
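To make the comparison concrete, here is a toy calculation. The numbers are entirely hypothetical, chosen only to illustrate the structure of the argument: if the risk averted by withholding scales with risk-per-month times the length of the delay, then a brief delay under short timelines can avert as much risk as a longer delay under long timelines.

```python
# Toy model: risk averted by withholding = risk per month x months of delay.
# All figures below are hypothetical, purely illustrative.

def risk_averted(risk_per_month: float, delay_months: float) -> float:
    """Risk averted by delaying disclosure, assuming a constant per-month risk rate."""
    return risk_per_month * delay_months

# Long timelines: lower per-month risk, but withholding delays disclosure longer.
long_timelines = risk_averted(risk_per_month=1.0, delay_months=2)

# Short timelines: AI uplift concentrates more risk into each month,
# even though withholding only delays disclosure briefly.
short_timelines = risk_averted(risk_per_month=2.0, delay_months=1)

print(long_timelines)   # 2.0
print(short_timelines)  # 2.0 -- the shorter delay averts just as much risk
```

The point is not these specific numbers, but that the per-month risk rate and the length of the delay move in opposite directions as timelines shorten, so the product need not fall.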
I therefore do not think that shorter timelines generally warrant less caution about a given infohazard.